# MLX framework optimization
## Mistral Small 3.2 24B Instruct 2506 Bf16
- License: Apache-2.0
- An MLX-format model converted from Mistral-Small-3.2-24B-Instruct-2506, suited to instruction-following tasks.
- Tags: Large Language Model · Supports Multiple Languages
- mlx-community · 163 · 1
## SWE Agent LM 32B 4bit
- License: Apache-2.0
- A 4-bit quantized version of SWE-bench/SWE-agent-LM-32B, specifically optimized for software-engineering tasks.
- Tags: Large Language Model · Transformers · English
- mlx-community · 31 · 1
## Gemma 3 12b It Qat 4bit
- License: Other
- An MLX-format model converted from google/gemma-3-12b-it-qat-q4_0-unquantized, supporting image-text generation tasks.
- Tags: Image-to-Text · Transformers · Other
- mlx-community · 984 · 5
## Gemma 3 4b It Qat 4bit
- License: Other
- A 4-bit quantized large language model trained with Quantization-Aware Training (QAT), based on the Gemma 3 architecture and optimized for the MLX framework.
- Tags: Image-to-Text · Transformers · Other
- mlx-community · 607 · 1
## Llama 4 Maverick 17B 128E Instruct 4bit
- License: Other
- A 4-bit quantized model converted from meta-llama/Llama-4-Maverick-17B-128E-Instruct, supporting multilingual text-generation tasks.
- Tags: Large Language Model · Supports Multiple Languages
- mlx-community · 538 · 6
## Llama 3.2 11B Vision Instruct Abliterated 8 Bit
- A multimodal model based on Llama-3.2-11B-Vision-Instruct that accepts image and text input and generates text output.
- Tags: Image-to-Text · Transformers · Supports Multiple Languages
- mlx-community · 128 · 0